Large-Scale Learning with Less RAM via Randomization
Authors
Abstract
We reduce the memory footprint of popular large-scale online learning methods by projecting our weight vector onto a coarse discrete set using randomized rounding. Compared to standard 32-bit float encodings, this reduces RAM usage by more than 50% during training and by up to 95% when making predictions from a fixed model, with almost no loss in accuracy. We also show that randomized counting can be used to implement per-coordinate learning rates, improving model quality with little additional RAM. We prove these memory-saving methods achieve regret guarantees similar to their exact variants. Empirical evaluation confirms excellent performance, dominating standard approaches across memory versus accuracy tradeoffs.
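The abstract names two randomized primitives: unbiased randomized rounding of weights onto a coarse grid, and randomized (approximate) counting for per-coordinate learning rates. Below is a minimal Python sketch of both, written from the abstract alone; the grid resolution and counter base are illustrative choices, not the paper's settings.

```python
import math
import random

def randomized_round(value, resolution=2.0 ** -13):
    """Project `value` onto a grid of spacing `resolution` so that the
    result is unbiased: E[randomized_round(v)] == v."""
    scaled = value / resolution
    lower = math.floor(scaled)
    # Round up with probability equal to the fractional part.
    if random.random() < scaled - lower:
        lower += 1
    return lower * resolution

class MorrisCounter:
    """Approximate counter kept in a few bits: increments succeed with
    geometrically decreasing probability, and the stored exponent is
    converted back to an unbiased estimate of the true count."""
    def __init__(self, base=2.0):
        self.base = base
        self.c = 0

    def increment(self):
        if random.random() < self.base ** -self.c:
            self.c += 1

    def estimate(self):
        # Unbiased estimate of the number of increments seen so far.
        return (self.base ** self.c - 1) / (self.base - 1)
```

Storing only the 16-bit grid index instead of a 32-bit float, and only the small counter exponent per coordinate, is the kind of saving the training-time numbers above refer to.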
Similar resources
Diabetic Retinopathy Detection via Deep Convolutional Networks for Discriminative Localization and Visual Explanation
We propose a deep learning method for interpretable diabetic retinopathy (DR) detection. The visual interpretability of the proposed method is achieved by adding a regression activation map (RAM) after the global average pooling layer of the convolutional network (CNN). With RAM, the proposed model can localize the discriminative regions of a retina image to show the specific region ...
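As a rough illustration of the regression activation map idea (not the paper's code), the map can be formed by weighting the final convolutional feature maps with the regression head's weights; the function and array names below are assumptions.

```python
import numpy as np

def regression_activation_map(conv_features, head_weights):
    """conv_features: (C, H, W) activations from the last conv layer.
    head_weights: (C,) weights of the linear regressor that sits on top of
    global average pooling. Returns an (H, W) map whose large values mark
    the regions most responsible for the predicted grade."""
    ram = np.tensordot(head_weights, conv_features, axes=(0, 0))  # (H, W)
    # Normalize to [0, 1] so the map can be overlaid on the retina image.
    ram -= ram.min()
    if ram.max() > 0:
        ram /= ram.max()
    return ram
```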
Practical Analysis of RSA Countermeasures Against Side-Channel Electromagnetic Attacks
This paper analyzes the robustness of RSA countermeasures against electromagnetic analysis and collision attacks. The proposed RSA cryptosystem uses residue number systems (RNS) for fast execution of modular arithmetic on large numbers. The parallel architecture is protected at the arithmetic and algorithmic levels by using the Montgomery Ladder and Leak Resistant Arithmetic countermeasures...
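The Montgomery Ladder mentioned here is a standard exponentiation scheme whose multiply/square pattern does not depend on the exponent bits. A plain-Python sketch of the ladder alone, without the RNS or Leak Resistant Arithmetic layers analyzed in the paper:

```python
def montgomery_ladder_pow(base, exponent, modulus):
    """Modular exponentiation with one multiply and one square per bit,
    regardless of the bit's value, which removes the simple data-dependent
    branching exploited by basic side-channel attacks."""
    r0, r1 = 1, base % modulus
    for bit in format(exponent, "b"):  # scan bits from most significant
        if bit == "0":
            r1 = (r0 * r1) % modulus
            r0 = (r0 * r0) % modulus
        else:
            r0 = (r0 * r1) % modulus
            r1 = (r1 * r1) % modulus
    return r0

assert montgomery_ladder_pow(7, 560, 561) == pow(7, 560, 561)
```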
Big Learning with Little RAM
In large-scale machine learning, available memory (RAM) is often a key constraint, both during model training and when making new predictions. In this paper, we reduce memory cost by projecting our weight vector β ∈ R^d onto a coarse discrete set using randomized rounding. Because the values of the discrete set can be stored more compactly than standard 32-bit float encodings, this reduces RAM us...
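This is an earlier version of the paper above. As a sketch of what "stored more compactly than standard 32-bit float encodings" can mean in practice, the snippet below randomized-rounds float weights onto a 2^-13 grid and keeps only an int16 index per weight; the 13 fractional bits are an illustrative choice, not the paper's encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_q2_13(weights):
    """Randomized-round float weights onto a 2**-13 grid and keep only the
    int16 grid index, roughly halving weight storage versus float32."""
    scaled = np.asarray(weights, dtype=np.float64) * (1 << 13)
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiased).
    codes = lower + (rng.random(scaled.shape) < scaled - lower)
    return np.clip(codes, -(1 << 15), (1 << 15) - 1).astype(np.int16)

def decode_q2_13(codes):
    return codes.astype(np.float32) / (1 << 13)

beta = rng.standard_normal(1_000_000).astype(np.float32)
print(beta.nbytes, encode_q2_13(beta).nbytes)  # 4000000 vs 2000000 bytes
```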
Machine Learning with Memristors via Thermodynamic RAM
Thermodynamic RAM (kT-RAM) is a neuromemristive co-processor design based on the theory of AHaH Computing and implemented via CMOS and memristors. The co-processor is a 2-D array of differential memristor pairs (synapses) that can be selectively coupled together (neurons) via the digital bit addressing of the underlying CMOS RAM circuitry. The chip is designed to plug into existing digital comp...
Fat-Fast VG-RAM WNN: A high performance approach
The Virtual Generalizing Random Access Memory Weightless Neural Network (VG-RAM WNN) is a type of WNN that only requires storage capacity proportional to the training set. As such, it is an effective machine learning technique that offers simple implementation and fast training, which can be done in one shot. However, the VG-RAM WNN test time for applications that require many training samples can...
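A minimal sketch of the VG-RAM idea as described here (storage proportional to the training set, one-shot training): each neuron stores the binary input/label pairs it sees and, at test time, answers with the label of the stored pattern nearest in Hamming distance. Class and method names are illustrative, and the Fat-Fast speedups from the paper are not shown.

```python
class VGRAMNeuron:
    """Virtual generalizing RAM neuron: memory grows with the training
    set instead of with 2**input_bits."""

    def __init__(self):
        self.memory = []  # list of (bits, label) pairs

    def train(self, bits, label):
        # One-shot learning: a single pass that just stores the example.
        self.memory.append((tuple(bits), label))

    def test(self, bits):
        def hamming(entry):
            stored, _ = entry
            return sum(a != b for a, b in zip(stored, bits))
        return min(self.memory, key=hamming)[1]

neuron = VGRAMNeuron()
neuron.train([1, 0, 1, 1], "A")
neuron.train([0, 0, 0, 1], "B")
print(neuron.test([1, 0, 1, 0]))  # "A" (nearest stored pattern)
```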
Publication date: 2013